Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges
Human-swarm interaction (HSI) involves a number of human factors impacting
human behaviour throughout the interaction. As the technologies used within HSI
advance, it is more tempting to increase the level of swarm autonomy within the
interaction to reduce the workload on humans. Yet, the prospective negative
effects of high levels of autonomy on human situational awareness can hinder
this process. Flexible autonomy aims to trade off these effects by changing
the level of autonomy within the interaction when required, with mixed
initiative combining human preferences and automation recommendations to
select an appropriate level of autonomy at a given point in time. However,
the effective implementation of mixed-initiative systems raises fundamental
questions on how to combine human preferences and automation recommendations,
how to realise the selected level of autonomy, and what the future impacts
on a human's cognitive states are. We explore open challenges that hamper the
process of developing effective flexible autonomy. We then highlight the
potential benefits of using system modelling techniques in HSI by illustrating
how they provide HSI designers with an opportunity to evaluate different
strategies for assessing the state of the mission and for adapting the level of
autonomy within the interaction to maximise mission success metrics.
Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling
Conference, Canberra, Australia
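The mixed-initiative selection described in the abstract above can be sketched as a simple blend of the two inputs. This is a hypothetical illustration only: the function name, the weighting scheme, and the discrete five-level autonomy scale are assumptions, not the paper's method.

```python
# Hypothetical sketch: combining a human's preferred autonomy level with an
# automation recommendation to select the level actually applied.
# The weighted average and the discrete level scale are illustrative assumptions.

def select_autonomy_level(human_pref, automation_rec, human_weight=0.6, levels=5):
    """Blend the two inputs, then snap to the nearest discrete level in [0, levels-1]."""
    blended = human_weight * human_pref + (1 - human_weight) * automation_rec
    return max(0, min(levels - 1, round(blended)))

# Human prefers high autonomy (4); automation recommends low (1).
print(select_autonomy_level(4, 1))  # -> 3
```

In practice the weighting itself could be adapted from the human's assessed cognitive state, which is one of the open questions the abstract raises.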
Behavioral Learning of Aircraft Landing Sequencing Using a Society of Probabilistic Finite State Machines
Air Traffic Control (ATC) is a complex, safety-critical environment. A tower
controller makes many real-time decisions to sequence aircraft.
While optimization tools exist to assist controllers at some airports, even
there the sequence actually adopted by the controller differs significantly
from the one proposed by the optimization algorithm, owing to the highly
dynamic nature of the environment. The
objective of this paper is to test the hypothesis that one can learn from the
sequence adopted by the controller some strategies that can act as heuristics
in decision support tools for aircraft sequencing. This aim is tested in this
paper by attempting to learn sequences generated from a well-known sequencing
method that is being used in the real world. The approach relies on a genetic
algorithm (GA) to learn these sequences using a society of Probabilistic
Finite State Machines (PFSMs). Each PFSM learns a different sub-space, thus
decomposing the learning problem into a group of agents that need to work
together to learn the overall problem. Three sequence metrics (Levenshtein,
Hamming, and Position distances) are compared as fitness functions in the GA.
The results suggest that it is possible to learn the behaviour of the
algorithm/heuristic that generated the original sequence from very limited
information.
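The three sequence metrics named in the abstract are standard distances and can be sketched as follows. These are textbook implementations, not the paper's code; the function names and example sequences are illustrative.

```python
# Hypothetical sketch of the three sequence metrics used as GA fitness functions.
# Standard textbook implementations; names and example sequences are illustrative.

def hamming(a, b):
    """Count of positions at which two equal-length sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def position_distance(a, b):
    """Sum of absolute differences between each item's positions in a and b
    (assumes both sequences are permutations of the same items)."""
    pos_b = {item: j for j, item in enumerate(b)}
    return sum(abs(i - pos_b[item]) for i, item in enumerate(a))

# Swapping the last two aircraft in a four-aircraft sequence:
print(hamming("ABCD", "ABDC"))            # -> 2
print(levenshtein("ABCD", "ABDC"))        # -> 2
print(position_distance("ABCD", "ABDC"))  # -> 2
```

The metrics differ in what they penalise: Hamming requires equal lengths and counts positional mismatches, Levenshtein tolerates insertions and deletions, and the position distance measures how far each aircraft moved in the ordering.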
Towards Trust-Aware Human-Automation Interaction: An Overview of the Potential of Computational Trust Models
Several computational models have been proposed to quantify trust and its relationship to other system variables. However, these models remain under-utilised in human-machine interaction settings due to the gap between modellers' intent to capture a phenomenon and the requirements for employing the models in a practical context. Our work amalgamates insights from the system modelling, trust, and human-autonomy teaming literature to address this gap. We explore the potential of computational trust models in the development of trust-aware systems by investigating three research questions: (1) At which stages of development can trust models be used by designers? (2) How can trust models contribute to trust-aware systems? (3) Which factors should be incorporated within trust models to enhance their effectiveness and usability? We conclude with future research directions.
Contextually Aware Intelligent Control Agents for Heterogeneous Swarms
An emerging challenge in swarm shepherding research is to design effective
and efficient artificial intelligence algorithms that maintain a low
computational ceiling while increasing the swarm's ability to operate in
diverse contexts. We propose a methodology to design a context-aware
swarm-control intelligent agent. The intelligent control agent (shepherd) first
uses swarm metrics to recognise the type of swarm it interacts with to then
select a suitable parameterisation from its behavioural library for that
particular swarm type. The design principle of our methodology is to increase
the situation awareness (i.e., information content) of the control agent
without sacrificing the low-computational cost necessary for efficient swarm
control. We demonstrate successful shepherding in both homogeneous and
heterogeneous swarms.
Comment: 37 pages, 3 figures, 11 tables
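The recognise-then-select loop described in the abstract can be sketched as a metric-based lookup into a behavioural library. This is a minimal illustration under stated assumptions: the mean inter-agent distance metric, the threshold, the swarm-type labels, and the parameter names are all hypothetical, not the paper's actual design.

```python
# Hypothetical sketch of context-aware swarm control: classify the swarm type
# from a simple swarm metric, then look up a shepherd parameterisation in a
# behavioural library. Metric, threshold, and parameter names are assumptions.
import statistics

# Behavioural library: one parameter set per recognised swarm type.
BEHAVIOUR_LIBRARY = {
    "cohesive":  {"drive_gain": 1.0, "collect_gain": 0.5},
    "dispersed": {"drive_gain": 0.4, "collect_gain": 1.5},
}

def recognise_swarm(inter_agent_distances, threshold=5.0):
    """Classify the swarm type from the mean inter-agent distance."""
    return "cohesive" if statistics.mean(inter_agent_distances) < threshold else "dispersed"

def select_parameters(inter_agent_distances):
    """Pick the parameterisation matching the recognised swarm type."""
    return BEHAVIOUR_LIBRARY[recognise_swarm(inter_agent_distances)]

# A tightly clustered swarm is recognised as cohesive:
print(select_parameters([1.2, 2.0, 1.8]))  # -> {'drive_gain': 1.0, 'collect_gain': 0.5}
```

The lookup keeps the per-step cost to one metric evaluation and a dictionary access, which matches the abstract's stated goal of raising situation awareness without raising computational cost.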